Learning Representations for Structured Data

special session @ IJCNN 2019

Learning Representations for Structured Data is a special session at the 2019 International Joint Conference on Neural Networks (IJCNN), which will be held at the InterContinental Budapest Hotel in Budapest, Hungary, on July 14-19, 2019.

Call for papers

Structured data, e.g. sequences, trees, and graphs, are a natural representation for compound information made of atomic information pieces (i.e. the nodes and their labels) and their intertwined relationships, represented by the edges (and their labels). Sequences are simple structures representing linear dependencies, as in genomics and proteomics or in time series data. Trees, on the other hand, make it possible to model hierarchical contexts and relationships, such as natural language sentences, crystallographic structures, and images. Graphs are the most general and complex form of structured data: they can represent networks of interacting elements, e.g. in social graphs or metabolomics, as well as data where topological variations influence the features of interest, e.g. molecular compounds. Being able to process data in these rich structured forms provides a fundamental advantage when it comes to identifying data patterns suitable for predictive and/or explorative analyses. This has motivated a recent and growing interest of the machine learning community in the development of learning models for structured information.

On the other hand, recent improvements in the predictive performance shown by machine learning methods are due to their ability, in contrast to traditional approaches, to learn a “good” representation for the task under consideration. Deep learning techniques are nowadays widespread, since they allow such representation learning to be performed in an end-to-end fashion. Nonetheless, representation learning is becoming of great importance in other areas as well, such as kernel-based and probabilistic models. It has also been shown that, when the data available for the task at hand is limited, it is still beneficial to resort to representations learned in an unsupervised fashion, or on different, but related, tasks.

This session focuses on learning representations for structured data such as sequences, trees, graphs, and relational data. Topics that are of interest to this session include, but are not limited to:

  • Probabilistic models for structured data
  • Structured output generation (probabilistic models, variational autoencoders, adversarial training, …)
  • Deep learning and representation learning for structures
  • Learning with network data
  • Recurrent, recursive and contextual models
  • Reservoir computing and randomized neural networks for structures
  • Kernels for structured data
  • Relational deep learning
  • Learning implicit representations
  • Applications of adaptive structured data processing, e.g. natural language processing, machine vision (e.g. point clouds as graphs), materials science, chemoinformatics, computational biology, and social networks

Important Dates

Paper Submissions: December 15, 2018

Paper Acceptance Notifications: January 30, 2019

Session Organisers

Davide Bacciu, University of Pisa

Thomas Gärtner, University of Nottingham

Nicolò Navarin, University of Padova

Alessandro Sperduti, University of Padova

For any enquiries, please write to: bacciu [at] di.unipi.it or nnavarin [at] math.unipd.it